Cashews are grown by over 3 million smallholders in more than 40 countries worldwide as a principal source of income. As the third largest cashew producer in Africa, Benin has nearly 200,000 smallholder cashew growers contributing 15% of the country's national export earnings. However, a lack of information on where and how cashew trees grow across the country hinders decision-making that could support increased cashew production and poverty alleviation. By leveraging 2.4-m Planet Basemaps and 0.5-m aerial imagery, newly developed deep learning algorithms, and large-scale ground truth datasets, we successfully produced the first national map of cashew plantations in Benin and characterized the expansion of those plantations between 2015 and 2021. In particular, we developed a SpatioTemporal Classification with Attention (STCA) model to map the distribution of cashew plantations, which can fully capture texture information from discriminative time steps during a growing season. We further developed a Clustering Augmented Self-supervised Temporal Classification (CASTC) model to distinguish high-density versus low-density cashew plantations through automatic feature extraction and optimized clustering. Results show that the STCA model achieved an overall accuracy of 80% and the CASTC model achieved an overall accuracy of 77.9%. We found that the cashew area in Benin doubled from 2015 to 2021, with 60% of new plantation development coming from cropland or fallow land, while encroachment of cashew plantations into protected areas increased by 70%. Only half of cashew plantations were high-density in 2021, suggesting high potential for intensification. Our study illustrates the power of combining high-resolution remote sensing imagery and state-of-the-art deep learning algorithms to better understand tree crops in heterogeneous smallholder landscapes.
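The abstract does not spell out the STCA architecture, but its central idea, attending over the time steps of a satellite image sequence so that the most discriminative dates dominate the prediction, can be sketched roughly as below. The layer sizes, band count, and module names are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class TemporalAttentionClassifier(nn.Module):
    """Toy spatiotemporal classifier: a shared CNN encodes each time step's image
    patch, and an attention layer weights the discriminative time steps before
    classification. Sizes are illustrative, not taken from the paper."""

    def __init__(self, in_channels=4, feat_dim=64, n_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU(),
        )
        self.attn_score = nn.Linear(feat_dim, 1)   # one scalar score per time step
        self.head = nn.Linear(feat_dim, n_classes)

    def forward(self, x):                          # x: (batch, time, channels, H, W)
        b, t = x.shape[:2]
        feats = self.encoder(x.flatten(0, 1)).view(b, t, -1)    # (b, t, feat_dim)
        weights = torch.softmax(self.attn_score(feats), dim=1)  # attention over time
        pooled = (weights * feats).sum(dim=1)                   # weighted temporal summary
        return self.head(pooled)

# Example: a batch of 2 pixels/patches, 8 time steps of 4-band 16x16 imagery
logits = TemporalAttentionClassifier()(torch.randn(2, 8, 4, 16, 16))
```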
Graph Neural Networks (GNNs), originally proposed for node classification, have also motivated many recent works on edge prediction (a.k.a., link prediction). However, existing methods lack an elaborate design that accounts for two frequently overlooked distinctions between the tasks: (i) edges only constitute the topology in the node classification task, but can serve as both the topology and the supervision (i.e., labels) in the edge prediction task; (ii) node classification makes a prediction over each individual node, while edge prediction is determined by each pair of nodes. To this end, we propose a novel edge prediction paradigm named Edge-aware Message PassIng neuRal nEtworks (EMPIRE). Concretely, we first introduce an edge splitting technique to specify the use of each edge, so that each edge is used solely as either the topology or the supervision (termed a topology edge or a supervision edge). We then develop a new message passing mechanism that generates messages to source nodes (through topology edges) while being aware of target nodes (through supervision edges). To emphasize the differences between node pairs connected by supervision edges and unconnected pairs, we further weight the messages to highlight those that reflect the differences. In addition, we design a novel negative node-pair sampling trick that efficiently samples 'hard' negative instances among the supervision instances and can significantly improve performance. Experimental results verify that the proposed method significantly outperforms existing state-of-the-art models on the edge prediction task over multiple homogeneous and heterogeneous graph datasets.
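As a rough illustration of the edge-splitting step, the sketch below assigns every edge exactly one role, topology or supervision, so that no edge is used for both message passing and labels. The split ratio and the uniform random assignment are assumptions for illustration, not the procedure used in the paper.

```python
import torch

def split_edges(edge_index, supervision_ratio=0.3, seed=0):
    """Randomly assign each edge a single role: it either stays in the
    message-passing topology or is held out as a supervision (label) edge.
    The ratio and uniform sampling are illustrative choices, not the paper's."""
    g = torch.Generator().manual_seed(seed)
    num_edges = edge_index.size(1)
    perm = torch.randperm(num_edges, generator=g)
    n_sup = int(supervision_ratio * num_edges)
    sup_idx, topo_idx = perm[:n_sup], perm[n_sup:]
    return edge_index[:, topo_idx], edge_index[:, sup_idx]

# Example: a small graph in COO format (2 x num_edges)
edge_index = torch.tensor([[0, 1, 2, 3, 4], [1, 2, 3, 4, 0]])
topology_edges, supervision_edges = split_edges(edge_index)
```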
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once, which was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
Geometric camera calibration is often required for applications that understand the perspective of an image. We propose Perspective Fields as a representation that models the local perspective properties of an image. Perspective Fields contain per-pixel information about the camera view, parameterized as an up vector and a latitude value. This representation has a number of advantages: it makes minimal assumptions about the camera model and is invariant or equivariant to common image editing operations such as cropping, warping, and rotation. It is also more interpretable and better aligned with human perception. We train a neural network to predict Perspective Fields, and the predicted fields can be easily converted to calibration parameters. We demonstrate the robustness of our approach under various scenarios compared with camera calibration-based methods and show example applications in image compositing.
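The per-pixel parameterization can be made concrete with a small numerical example. The sketch below derives a latitude map and an up-vector field for a simple pinhole camera with known field of view, pitch, and roll; the coordinate conventions (y-down image, z-forward camera, world up along -y) and the finite-difference projection of the up direction are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

def perspective_field(h, w, fov_deg=60.0, pitch_deg=10.0, roll_deg=5.0, eps=1e-3):
    """Illustrative per-pixel latitude / up-vector field for a pinhole camera.
    Assumes y-down image, z-forward camera, world up = -y; avoid zenith/nadir pixels."""
    f = 0.5 * w / np.tan(np.radians(fov_deg) / 2)
    cx, cy = (w - 1) / 2, (h - 1) / 2
    p, r = np.radians(pitch_deg), np.radians(roll_deg)
    Rx = np.array([[1, 0, 0], [0, np.cos(p), -np.sin(p)], [0, np.sin(p), np.cos(p)]])
    Rz = np.array([[np.cos(r), -np.sin(r), 0], [np.sin(r), np.cos(r), 0], [0, 0, 1]])
    R = Rx @ Rz                                   # assumed camera-to-world rotation
    v, u = np.mgrid[0:h, 0:w]
    rays = np.stack([(u - cx) / f, (v - cy) / f, np.ones_like(u, dtype=float)], -1)
    rays_w = rays @ R.T                           # rotate per-pixel rays into world frame
    rays_w /= np.linalg.norm(rays_w, axis=-1, keepdims=True)
    up_world = np.array([0.0, -1.0, 0.0])
    latitude = np.arcsin(rays_w @ up_world)       # angle of the ray above the horizon
    # up vector: project a small world-up step back into the image and normalise
    shifted_cam = (rays_w + eps * up_world) @ R   # world -> camera
    uv_shift = np.stack([shifted_cam[..., 0] / shifted_cam[..., 2] * f + cx - u,
                         shifted_cam[..., 1] / shifted_cam[..., 2] * f + cy - v], -1)
    up_field = uv_shift / np.linalg.norm(uv_shift, axis=-1, keepdims=True)
    return latitude, up_field

lat, up = perspective_field(32, 48)   # (32, 48) latitudes and (32, 48, 2) up vectors
```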
The open-source software (OSS) vulnerability management process has become increasingly important as the number of discovered OSS vulnerabilities keeps growing over time. Monitoring vulnerability-fixing commits is part of the standard process for preventing vulnerability exploitation. However, manually detecting vulnerability-fixing commits is time-consuming due to the potentially large number of commits to review. Recently, many techniques have been proposed to automatically detect vulnerability-fixing commits using machine learning. These solutions either (1) do not use deep learning, or (2) apply deep learning to only limited sources of information. This paper proposes VulCurator, a tool that leverages richer sources of information, including commit messages, code changes, and issue reports, for vulnerability-fixing commit classification. Our experimental results show that VulCurator outperforms the state-of-the-art baselines in terms of F1 score. The VulCurator tool is publicly available at https://github.com/ntgiang71096/vfdetector and https://zenodo.org/record/7034132#.yw3mn-xbzdi.
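To make the "richer sources of information" idea concrete, here is a minimal late-fusion sketch with one branch per source (commit message, code change, issue report) whose class probabilities are averaged. The encoders, dimensions, and fusion rule are assumptions for illustration and do not reflect VulCurator's actual architecture.

```python
import torch
import torch.nn as nn

class LateFusionVFCClassifier(nn.Module):
    """Illustrative late-fusion classifier for vulnerability-fixing commit (VFC)
    detection: one small branch per information source, probabilities averaged.
    Inputs are assumed to be pre-computed embeddings of each source."""

    def __init__(self, embed_dim=128, n_classes=2):
        super().__init__()
        self.branches = nn.ModuleDict({
            name: nn.Sequential(nn.Linear(embed_dim, 64), nn.ReLU(), nn.Linear(64, n_classes))
            for name in ("message", "code_change", "issue_report")
        })

    def forward(self, features):   # features: dict of source name -> (batch, embed_dim)
        probs = [torch.softmax(self.branches[k](v), dim=-1) for k, v in features.items()]
        return torch.stack(probs).mean(dim=0)   # averaged class probabilities

model = LateFusionVFCClassifier()
batch = {k: torch.randn(4, 128) for k in ("message", "code_change", "issue_report")}
scores = model(batch)   # (4, 2): probability of (non-fix, fix) per commit
```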
Constructing a static call graph requires trade-offs between soundness and precision. Unfortunately, program analysis techniques for call graph construction are usually imprecise. To address this issue, researchers have recently proposed call graph pruning empowered by machine learning to post-process call graphs constructed by static analysis. A machine learning model is built to capture the information in the call graph by extracting structural features for a random forest classifier, which then eliminates the edges predicted to be false positives. Despite the improvements shown by machine learning models, they are still limited because they do not consider source code semantics and thus often cannot effectively distinguish true and false positives. In this paper, we propose a novel call graph pruning technique, AutoPruner, for eliminating false positives in call graphs via both statistical semantic and structural analysis. Given a call graph constructed by a conventional static analysis tool, AutoPruner takes a Transformer-based approach to capture the semantic relationships between the caller and callee functions associated with each edge in the call graph. To do so, AutoPruner fine-tunes a model of code that was pre-trained on a large corpus to represent source code based on descriptions of its semantics. Next, the model is used to extract semantic features from the functions related to each edge in the call graph. AutoPruner uses these semantic features, together with the structural features extracted from the call graph, to classify each edge via a feed-forward neural network. Our empirical evaluation on a benchmark dataset of real-world programs shows that AutoPruner outperforms the state-of-the-art baselines, improving the F-measure by up to 13% in identifying false-positive edges in a static call graph.
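A minimal sketch of the final classification step described above: a semantic embedding of the caller/callee pair is concatenated with structural features of the edge and fed to a feed-forward network that decides whether the edge is a false positive. The feature dimensions and network sizes are assumptions, not AutoPruner's actual configuration.

```python
import torch
import torch.nn as nn

class EdgeClassifier(nn.Module):
    """Illustrative call-graph edge classifier: a semantic embedding of the
    caller/callee pair (e.g. from a fine-tuned pre-trained code model) is
    concatenated with hand-crafted structural features of the edge and passed
    through a feed-forward network. Dimensions are illustrative assumptions."""

    def __init__(self, semantic_dim=768, structural_dim=10):
        super().__init__()
        self.ffn = nn.Sequential(
            nn.Linear(semantic_dim + structural_dim, 128), nn.ReLU(),
            nn.Linear(128, 2),   # keep edge vs. prune as false positive
        )

    def forward(self, semantic_feat, structural_feat):
        return self.ffn(torch.cat([semantic_feat, structural_feat], dim=-1))

# Example: score 16 candidate edges
clf = EdgeClassifier()
logits = clf(torch.randn(16, 768), torch.randn(16, 10))
```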
We present an approach for the planar surface reconstruction of a scene from images with limited overlap. This reconstruction task is challenging because it requires jointly reasoning about single-image 3D reconstruction, correspondence between images, and the relative camera pose between images. Past work has proposed optimization-based approaches. We introduce a simpler approach, the PlaneFormer, which performs 3D reasoning with a transformer applied to 3D-aware plane tokens. Our experiments show that our approach is substantially more effective than prior work and that several 3D-specific design decisions are crucial for its success.
Prediction over tabular data is an essential and fundamental problem in many important downstream tasks. However, existing methods either take each data instance of the table independently as input or do not fully utilize multi-row features and labels to directly change and enhance the target data representations. In this paper, we propose to 1) construct a hypergraph from relevant data instance retrieval to model the cross-row and cross-column patterns of those instances, and 2) perform message propagation to enhance the target data instance representation for the tabular prediction task. Specifically, our specially designed message propagation step benefits from 1) the fusion of labels and features during propagation, and 2) locality-aware high-order feature interactions. Experiments on two important tabular data prediction tasks validate the superiority of the proposed PET model over other baselines. In addition, we demonstrate the effectiveness of the model components and the feature augmentation ability of PET via various ablation studies and visualizations. The code is available at https://github.com/kounianhuadu/pet.
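As a toy stand-in for the retrieval-plus-propagation idea, the sketch below retrieves each query row's nearest training rows and fuses their features and labels into the query representation with a plain k-NN average. The real model builds a hypergraph and learns the propagation, so the retrieval rule, k, and the fusion step here are all illustrative assumptions.

```python
import numpy as np

def retrieval_augmented_features(x_query, x_train, y_train, k=5):
    """Toy retrieval-and-fuse step: for each query row, retrieve its k nearest
    training rows and append the mean of their features and labels to the
    query's own features. A k-NN average stands in for learned propagation."""
    dists = np.linalg.norm(x_query[:, None, :] - x_train[None, :, :], axis=-1)
    nn_idx = np.argsort(dists, axis=1)[:, :k]                   # (n_query, k)
    neigh_feat = x_train[nn_idx].mean(axis=1)                   # (n_query, d)
    neigh_label = y_train[nn_idx].mean(axis=1, keepdims=True)   # (n_query, 1)
    return np.concatenate([x_query, neigh_feat, neigh_label], axis=1)

# Example: augment 10 query rows with information from 100 labeled rows
x_train = np.random.rand(100, 8)
y_train = np.random.randint(0, 2, size=100).astype(float)
augmented = retrieval_augmented_features(np.random.rand(10, 8), x_train, y_train)
```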
Automatically generated static code warnings suffer from a large number of false alarms. Hence, developers only take action on a small percentage of those warnings. To better predict which static code warnings should not be ignored, we suggest that analysts need to look deeper into their algorithms to find choices that better suit the particulars of their specific problem. Specifically, we show here that effective predictors of such warnings can be created by methods that locally adjust the decision boundary (between actionable warnings and others). These methods yield a new high-water mark for recognizing actionable static code warnings. For eight open-source Java projects (cassandra, jmeter, commons, lucene-solr, maven, ant, tomcat, derby), we achieve perfect test results on 4/8 datasets and, overall, a median AUC (area under the true-negative, true-positive curve) of 92%.
Despite recent advances in few-shot image stylization, these methods fail to capture stylistic details that are obvious to humans. Details such as the shape of the eyes or the boldness of the lines are especially difficult for a model to learn, particularly in a limited-data setting. In this work, we aim to perform one-shot image stylization that gets the details right. Given a reference style image, we approximate paired real data using GAN inversion and finetune a pretrained generator using that approximate paired data. We then encourage the finetuned generator to generalize so that the learned style can be applied to all other images.